Results 1 - 20 of 29
1.
Med Phys ; 2024 Apr 17.
Article in English | MEDLINE | ID: mdl-38629779

ABSTRACT

BACKGROUND: Contrast-enhanced computed tomography (CECT) provides much more information than non-enhanced CT images, especially for the differentiation of malignancies such as liver carcinomas. Contrast media injection phase information is usually missing in public datasets and is not standardized in the clinic, even within the same region and language. This is a barrier to the effective use of available CECT images in clinical research. PURPOSE: The aim of this study is to detect the contrast media injection phase from CT images by means of organ segmentation and machine learning algorithms. METHODS: A total of 2509 CT images, split into four subsets of non-contrast (class #0), arterial (class #1), venous (class #2), and delayed (class #3) after contrast media injection, were collected from two CT scanners. Masks of seven organs, including the liver, spleen, heart, kidneys, lungs, urinary bladder, and aorta, along with body contour masks, were generated by pre-trained deep learning algorithms. Subsequently, five first-order statistical features, including the average, standard deviation, and 10th, 50th, and 90th percentiles extracted from the above-mentioned masks, were fed to machine learning models after feature selection and reduction to classify the CT images into one of the four above-mentioned classes. A 10-fold data split strategy was followed. The performance of our methodology was evaluated in terms of classification accuracy metrics. RESULTS: The best performance was achieved by Boruta feature selection and a random forest (RF) model, with an average area under the curve of more than 0.999 and an accuracy of 0.9936 averaged over the four classes and 10 folds. Boruta feature selection retained all predictor features. The lowest classification accuracy was observed for class #2 (0.9888), which is still an excellent result. In the 10-fold strategy, only 33 of 2509 cases (∼1.4%) were misclassified. The performance was consistent across all folds. CONCLUSIONS: We developed a fast, accurate, reliable, and explainable methodology to classify contrast media injection phases, which may be useful for data curation and annotation in big online datasets or local datasets with non-standard or missing series descriptions. Our model, comprising two steps of deep learning and machine learning, may help to exploit available datasets more effectively.
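
As an illustration of the pipeline described above, the following is a minimal Python sketch of the feature-extraction and classification stage, assuming precomputed binary organ masks; all function and variable names are illustrative and not taken from the paper's code.

```python
# Minimal sketch of the feature-extraction and classification stage: five
# first-order statistics per organ mask, fed to a random forest classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def first_order_features(ct: np.ndarray, mask: np.ndarray) -> list:
    """Five first-order statistics of CT intensities inside one organ mask."""
    vals = ct[mask > 0]
    return [vals.mean(), vals.std(),
            np.percentile(vals, 10), np.percentile(vals, 50), np.percentile(vals, 90)]

def feature_vector(ct: np.ndarray, masks: dict) -> np.ndarray:
    """Concatenate the five statistics over all organ/body-contour masks."""
    return np.concatenate([first_order_features(ct, m) for m in masks.values()])

# X: (n_scans, n_masks * 5) feature matrix, y: phase labels 0..3.
# A Boruta wrapper (e.g. the BorutaPy package) would be applied to X beforehand.
clf = RandomForestClassifier(n_estimators=500, random_state=0)
# clf.fit(X_train, y_train); phase = clf.predict(X_test)
```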

2.
Quant Imaging Med Surg ; 14(3): 2146-2164, 2024 Mar 15.
Article in English | MEDLINE | ID: mdl-38545051

ABSTRACT

Background: Positron emission tomography (PET) imaging encounters the obstacle of partial volume effects, arising from its limited intrinsic resolution, giving rise to (I) considerable bias, particularly for structures comparable in size to the point spread function (PSF) of the system; and (II) blurred image edges and blending of textures along the borders. We set out to build a deep learning-based framework for predicting partial volume corrected full-dose (FD + PVC) images from either standard- or low-dose (LD) PET images without requiring any anatomical data, in order to provide a joint solution for partial volume correction and denoising of LD PET images. Methods: We trained a modified encoder-decoder U-Net network with standard-dose or LD PET images as the input and FD + PVC images produced by six different PVC methods as the target. These six PVC approaches included geometric transfer matrix (GTM), multi-target correction (MTC), region-based voxel-wise correction (RBV), iterative Yang (IY), reblurred Van-Cittert (RVC), and Richardson-Lucy (RL). The proposed models were evaluated using standard criteria, such as the peak signal-to-noise ratio (PSNR), root mean squared error (RMSE), structural similarity index (SSIM), relative bias, and absolute relative bias. Results: Different levels of error were observed across these PVC methods; errors were relatively smaller for GTM with an SSIM of 0.63 for LD and 0.29 for FD, IY with an SSIM of 0.63 for LD and 0.67 for FD, RBV with an SSIM of 0.57 for LD and 0.65 for FD, and RVC with an SSIM of 0.89 for LD and 0.94 for FD. However, large quantitative errors were observed for the MTC approach, with an RMSE of 2.71 for LD and 2.45 for FD, and the RL approach, with an RMSE of 5 for LD and 3.27 for FD. Conclusions: We found that the proposed framework could effectively perform joint denoising and partial volume correction for PET images with LD and FD input PET data (LD vs. FD). When no magnetic resonance imaging (MRI) images are available, the developed deep learning models could be used for partial volume correction on LD or standard PET-computed tomography (PET-CT) scans as an image quality enhancement technique.
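
For readers unfamiliar with the evaluation criteria named above, the snippet below shows one common way to compute them with scikit-image and NumPy; the array names and the data_range choice are assumptions, not the paper's implementation.

```python
# Illustrative computation of the evaluation metrics used above: PSNR, SSIM,
# RMSE, and relative bias between a predicted and a reference image.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(pred: np.ndarray, ref: np.ndarray) -> dict:
    rng = ref.max() - ref.min()  # dynamic range of the reference image
    return {
        "psnr": peak_signal_noise_ratio(ref, pred, data_range=rng),
        "ssim": structural_similarity(ref, pred, data_range=rng),
        "rmse": float(np.sqrt(np.mean((pred - ref) ** 2))),
        "rel_bias_%": float(100 * (pred.mean() - ref.mean()) / ref.mean()),
    }
```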

3.
Phys Med ; 119: 103315, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38377837

ABSTRACT

PURPOSE: This work set out to propose an attention-based deep neural network to predict partial volume corrected images from PET data without utilizing anatomical information. METHODS: An attention-based convolutional neural network (ATB-Net) was developed to predict PVE-corrected images in brain PET imaging by concentrating on anatomical areas of the brain. The performance of the deep neural network for performing PVC without anatomical images was evaluated for two PVC methods: the iterative Yang (IY) and reblurred Van-Cittert (RVC) approaches. The RVC and IY PVC approaches were applied to PET images to generate the reference images. The U-Net network for partial volume correction was trained twice, once without the attention module and once with the attention module concentrating on the anatomical brain regions. RESULTS: Regarding the peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and root mean square error (RMSE) metrics, the proposed ATB-Net outperformed the standard U-Net model (without the attention module). For the RVC technique, the ATB-Net performed only marginally better than the U-Net; however, for the IY method, which is a region-wise method, the attention-based approach resulted in a substantial improvement. The mean absolute relative SUV difference and mean absolute relative bias improved by 38.02% and 91.60% for the RVC method and 77.47% and 79.68% for the IY method when using the ATB-Net model, respectively. CONCLUSIONS: Our results suggest that, without using anatomical data, the attention-based DL model can perform PVC on PET images.
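
The following is a hedged PyTorch sketch of an additive attention gate of the kind used in attention U-Nets, to make the "attention module" concrete; the actual ATB-Net module may differ, and the channel sizes are placeholders.

```python
# A minimal additive attention gate: skip-connection features are re-weighted
# voxel-wise by a sigmoid map derived from the gating (decoder) signal.
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    def __init__(self, g_ch: int, x_ch: int, inter_ch: int):
        super().__init__()
        self.w_g = nn.Conv3d(g_ch, inter_ch, kernel_size=1)  # gating signal
        self.w_x = nn.Conv3d(x_ch, inter_ch, kernel_size=1)  # skip features
        self.psi = nn.Sequential(nn.Conv3d(inter_ch, 1, kernel_size=1), nn.Sigmoid())
        self.relu = nn.ReLU(inplace=True)

    def forward(self, g, x):
        # Simplification: assumes g and x already share spatial dimensions.
        a = self.psi(self.relu(self.w_g(g) + self.w_x(x)))  # weights in [0, 1]
        return x * a                                        # re-weighted skip
```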


Subject(s)
Brain; Fluorodeoxyglucose F18; Brain/diagnostic imaging; Neural Networks, Computer; Positron-Emission Tomography/methods; Signal-To-Noise Ratio; Image Processing, Computer-Assisted/methods
4.
Ann Nucl Med ; 38(1): 31-70, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37952197

ABSTRACT

We focus on reviewing state-of-the-art developments of dedicated PET scanners with irregular geometries and the potential of different aspects of multifunctional PET imaging. First, we discuss advances in non-conventional PET detector geometries. Then, we present innovative designs of organ-specific dedicated PET scanners for breast, brain, prostate, and cardiac imaging. We also review challenges and possible artifacts associated with image reconstruction algorithms for PET scanners with irregular geometries, such as non-cylindrical and partial angular coverage geometries, and how they can be addressed. Finally, we address some open issues, including the cost/benefit analysis of dedicated PET scanners, how far theoretical conceptual designs are from the market/clinic, and strategies to reduce fabrication cost without compromising performance.


Subject(s)
Image Processing, Computer-Assisted; Positron-Emission Tomography; Humans; Phantoms, Imaging; Positron-Emission Tomography/methods; Image Processing, Computer-Assisted/methods; Brain; Algorithms
5.
Eur J Nucl Med Mol Imaging ; 51(3): 734-748, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37897616

ABSTRACT

PURPOSE: To investigate the impact of reduced injected doses on the quantitative and qualitative assessment of the amyloid PET tracers [18F]flutemetamol and [18F]florbetaben. METHODS: Cognitively impaired and unimpaired individuals (N = 250, 36% Aβ-positive) were included and injected with [18F]flutemetamol (N = 175) or [18F]florbetaben (N = 75). PET scans were acquired in list-mode (90-110 min post-injection) and reduced-dose images were simulated to generate images at 75, 50, 25, 12.5 and 5% of the original injected dose. Images were reconstructed using vendor-provided reconstruction tools and visually assessed for Aβ-pathology. SUVRs were calculated for a global cortical region and three smaller regions using a cerebellar cortex reference tissue, and the Centiloid (CL) value was computed. Absolute and percentage differences in SUVR and CL were calculated between dose levels, and the ability to discriminate between Aβ- and Aβ+ scans was evaluated using ROC analyses. Finally, intra-reader agreement between the reduced-dose and 100% images was evaluated. RESULTS: At 5% injected dose, the percentage change in SUVR was 3.72% and 3.12%, with absolute changes in Centiloid of 3.35 CL and 4.62 CL, for [18F]flutemetamol and [18F]florbetaben, respectively. At 12.5% injected dose, the percentage change in SUVR and absolute change in Centiloid were < 1.5%. AUCs for discriminating Aβ- from Aβ+ scans were high (AUC ≥ 0.94) across dose levels, and visual assessment showed intra-reader agreement of > 80% for both tracers. CONCLUSION: This proof-of-concept study showed that for both [18F]flutemetamol and [18F]florbetaben, adequate quantitative and qualitative assessments can be obtained at 12.5% of the original injected dose. However, decisions to reduce the injected dose should be made considering the specific clinical or research circumstances.
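
To make the SUVR and Centiloid quantification concrete, here is a small sketch under stated assumptions: the Centiloid scale linearly rescales SUVR so that a young-control anchor maps to 0 CL and a typical-AD anchor to 100 CL (Klunk et al., 2015); the anchor values below are placeholders, since the calibration is tracer- and pipeline-specific.

```python
# Sketch of the SUVR and Centiloid computations referred to above.
import numpy as np

def suvr(pet: np.ndarray, target_mask: np.ndarray, cereb_mask: np.ndarray) -> float:
    """Target-to-cerebellar-cortex standardized uptake value ratio."""
    return pet[target_mask > 0].mean() / pet[cereb_mask > 0].mean()

def centiloid(suvr_value: float, suvr_yc: float = 1.0, suvr_ad: float = 2.0) -> float:
    """Linear Centiloid transform; suvr_yc / suvr_ad are tracer- and
    pipeline-specific calibration anchors (placeholder values here)."""
    return 100.0 * (suvr_value - suvr_yc) / (suvr_ad - suvr_yc)
```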


Subject(s)
Alzheimer Disease; Aniline Compounds; Stilbenes; Humans; Benzothiazoles; Amyloid/metabolism; Positron-Emission Tomography/methods; Alzheimer Disease/diagnostic imaging; Amyloid beta-Peptides/metabolism; Brain/metabolism
6.
Comput Med Imaging Graph ; 110: 102315, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38006648

ABSTRACT

INTRODUCTION: Low-dose and fast PET imaging (low-count PET) plays a significant role in enhancing patient safety, healthcare efficiency, and patient comfort during medical imaging procedures. To achieve high-quality images from low-count PET scans, effective reconstruction models are crucial for denoising and enhancing image quality. The main goal of this paper is to develop an effective and accurate deep learning-based method for reconstructing low-count PET images, which is a challenging problem due to the limited amount of available data and the high level of noise in the acquired images. The proposed method aims to improve the quality of reconstructed PET images while preserving important features, such as edges and small details, by combining the strengths of UNET and Transformer networks. MATERIAL AND METHODS: The proposed TrUNET-MAPEM model integrates a residual UNET-transformer regularizer into the unrolled maximum a posteriori expectation maximization (MAPEM) algorithm for PET image reconstruction. A loss function based on a combination of the structural similarity index (SSIM) and mean squared error (MSE) is utilized to train the network. The simulated dataset was generated using the Brainweb phantom, while the real patient dataset was acquired on a Siemens Biograph mMR PET scanner. We also implemented state-of-the-art methods for comparison purposes: OSEM, MAPOSEM, and supervised learning using a 3D-UNET network. The reconstructed images were compared to ground truth images using the peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and relative root mean square error (rRMSE) to quantitatively evaluate their accuracy. RESULTS: Our proposed TrUNET-MAPEM approach was evaluated using both simulated and real patient data. For the patient data, our model achieved an average PSNR of 33.72 dB, an average SSIM of 0.955, and an average rRMSE of 0.39, compared with average PSNRs of 36.89 dB, 34.12 dB, and 33.52 dB, average SSIMs of 0.944, 0.947, and 0.951, and average rRMSEs of 0.59, 0.49, and 0.42 for the other methods. For the simulated data, our model achieved an average PSNR of 31.23 dB, an average SSIM of 0.95, and an average rRMSE of 0.55, also outperforming the other state-of-the-art methods (OSEM, MAPOSEM, and 3DUNET-MAPEM). The model demonstrates potential for clinical use by successfully reconstructing smooth images while preserving edges. The comparison with the other methods demonstrates the strength of our approach across the reported metrics. CONCLUSION: The proposed TrUNET-MAPEM model presents a significant advancement in the field of low-count PET image reconstruction. The results demonstrate its potential for clinical use, as the model can produce images with reduced noise levels and better edge preservation compared to other reconstruction and post-processing algorithms. The proposed approach may have important clinical applications in the early detection and diagnosis of various diseases.
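
A hedged sketch of a combined SSIM/MSE training loss of the kind described above is shown below in PyTorch; for brevity it uses a global (whole-image) SSIM rather than the usual windowed form, and the weighting alpha is an assumption rather than the paper's value.

```python
# Composite loss: alpha * MSE + (1 - alpha) * (1 - SSIM), with a simplified
# global SSIM computed from image-wide means, variances, and covariance.
import torch

def ssim_mse_loss(pred: torch.Tensor, ref: torch.Tensor, alpha: float = 0.5,
                  c1: float = 1e-4, c2: float = 9e-4) -> torch.Tensor:
    mu_p, mu_r = pred.mean(), ref.mean()
    var_p, var_r = pred.var(), ref.var()
    cov = ((pred - mu_p) * (ref - mu_r)).mean()
    ssim = ((2 * mu_p * mu_r + c1) * (2 * cov + c2)) / \
           ((mu_p ** 2 + mu_r ** 2 + c1) * (var_p + var_r + c2))
    mse = torch.mean((pred - ref) ** 2)
    return alpha * mse + (1 - alpha) * (1 - ssim)  # lower is better for both
```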


Subject(s)
Image Processing, Computer-Assisted; Positron-Emission Tomography; Humans; Image Processing, Computer-Assisted/methods; Positron-Emission Tomography/methods; Algorithms; Phantoms, Imaging
7.
Clin Nucl Med ; 48(12): 1035-1046, 2023 Dec 01.
Article in English | MEDLINE | ID: mdl-37883015

ABSTRACT

PURPOSE: Medical imaging artifacts compromise image quality and quantitative analysis and might confound interpretation and misguide clinical decision-making. The present work envisions and demonstrates a new paradigm, the PET image Quality Assurance NETwork (PET-QA-NET), in which various image artifacts are detected and disentangled from images without prior knowledge of a standard of reference or ground truth for routine PET image quality assurance. METHODS: The network was trained and evaluated using training/validation/testing datasets consisting of 669/100/100 artifact-free oncological 18F-FDG PET/CT images and subsequently fine-tuned and evaluated on 384 scans (20% for fine-tuning) from 8 different PET centers. The developed DL model was quantitatively assessed using various image quality metrics calculated for 22 volumes of interest defined on each scan. In addition, 200 additional 18F-FDG PET/CT scans (this time with artifacts), generated using both CT-based attenuation and scatter correction (routine PET) and PET-QA-NET, were blindly evaluated by 2 nuclear medicine physicians for the presence of artifacts, diagnostic confidence, image quality, and the number of lesions detected in different body regions. RESULTS: Across the volumes of interest of 100 patients, SUV MAE values of 0.13 ± 0.04, 0.24 ± 0.1, and 0.21 ± 0.06 were reached for SUVmean, SUVmax, and SUVpeak, respectively (no statistically significant difference). Qualitative assessment showed a general trend of improved image quality and diagnostic confidence and reduced image artifacts for PET-QA-NET compared with routine CT-based attenuation and scatter correction. CONCLUSION: We developed a highly effective and reliable quality assurance tool that can be embedded routinely to detect and correct 18F-FDG PET image artifacts in the clinical setting, with notably improved PET image quality and quantitative capabilities.


Subject(s)
Fluorodeoxyglucose F18; Positron Emission Tomography Computed Tomography; Humans; Positron Emission Tomography Computed Tomography/methods; Artificial Intelligence; Artifacts; Positron-Emission Tomography/methods; Image Processing, Computer-Assisted/methods
8.
Eur J Nucl Med Mol Imaging ; 51(1): 40-53, 2023 12.
Article in English | MEDLINE | ID: mdl-37682303

ABSTRACT

PURPOSE: Image artefacts continue to pose challenges in clinical molecular imaging, resulting in misdiagnoses, additional radiation doses to patients and financial costs. Mismatch and halo artefacts occur frequently in gallium-68 (68Ga)-labelled compound whole-body PET/CT imaging. Correcting for these artefacts is not straightforward and requires algorithmic developments, given that conventional techniques have failed to address them adequately. In the current study, we employed differential privacy-preserving federated transfer learning (FTL) to manage clinical data sharing and tackle privacy issues when building centre-specific models that detect and correct artefacts present in PET images. METHODS: Altogether, 1413 patients with 68Ga prostate-specific membrane antigen (PSMA)/DOTA-TATE (TOC) PET/CT scans from 3 countries, including 8 different centres, were enrolled in this study. CT-based attenuation and scatter correction (CT-ASC) was used in all centres for quantitative PET reconstruction. Prior to model training, an experienced nuclear medicine physician reviewed all images to ensure the use of high-quality, artefact-free PET images (421 patients' images). A deep neural network (modified U2Net) was trained on 80% of the artefact-free PET images under centre-based (CeBa), centralized (CeZe) and the proposed differential privacy FTL frameworks. Quantitative analysis was performed on 20% of the clean data (with no artefacts) in each centre. A panel of two nuclear medicine physicians conducted a qualitative assessment of image quality, diagnostic confidence and image artefacts in 128 patients with artefacts (256 images for CT-ASC and FTL-ASC). RESULTS: The three approaches investigated in this study for 68Ga-PET imaging (CeBa, CeZe and FTL) resulted in a mean absolute error (MAE) of 0.42 ± 0.21 (95% CI: 0.38 to 0.47), 0.32 ± 0.23 (95% CI: 0.27 to 0.37) and 0.28 ± 0.15 (95% CI: 0.25 to 0.31), respectively. Statistical analysis using the Wilcoxon test revealed significant differences between the three approaches, with FTL outperforming CeBa and CeZe (p-value < 0.05) on the clean test set. The qualitative assessment demonstrated that FTL-ASC significantly improved image quality and diagnostic confidence and decreased image artefacts, compared to CT-ASC in 68Ga-PET imaging. In addition, mismatch and halo artefacts were successfully detected and disentangled in the chest, abdomen and pelvic regions. CONCLUSION: The proposed approach benefits from using large datasets from multiple centres while preserving patient privacy. Qualitative assessment by nuclear medicine physicians showed that the proposed model correctly addressed two main challenging artefacts in 68Ga-PET imaging. This technique could be integrated into the clinic for 68Ga-PET imaging artefact detection and disentanglement using multicentric heterogeneous datasets.
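
The differential-privacy ingredient of such a framework is typically a clip-and-noise step on the shared model updates (the Gaussian mechanism); the sketch below illustrates that idea only, with placeholder clip norm and noise scale, and the paper's exact mechanism and parameters may differ.

```python
# Illustrative Gaussian-mechanism step: each client's model update is
# norm-clipped (to bound sensitivity) and perturbed with Gaussian noise
# before being shared with the server.
import torch

def privatize_update(update: torch.Tensor, clip_norm: float = 1.0,
                     noise_multiplier: float = 1.1) -> torch.Tensor:
    norm = update.norm()
    scale = torch.clamp(clip_norm / (norm + 1e-12), max=1.0)
    clipped = update * scale                                   # bounded norm
    noise = torch.randn_like(clipped) * noise_multiplier * clip_norm
    return clipped + noise                                     # DP-noised update
```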


Subject(s)
Positron Emission Tomography Computed Tomography; Prostatic Neoplasms; Male; Humans; Positron Emission Tomography Computed Tomography/methods; Artifacts; Gallium Radioisotopes; Privacy; Positron-Emission Tomography/methods; Machine Learning; Image Processing, Computer-Assisted/methods
9.
Insights Imaging ; 14(1): 141, 2023 Aug 25.
Article in English | MEDLINE | ID: mdl-37620554

ABSTRACT

PURPOSE: This study focuses on assessing the performance of active learning techniques for training a brain MRI glioma segmentation model. METHODS: The publicly available training dataset provided for the 2021 RSNA-ASNR-MICCAI Brain Tumor Segmentation (BraTS) Challenge was used in this study, consisting of 1251 multi-institutional, multi-parametric MR images. Post-contrast T1, T2, and T2 FLAIR images as well as ground truth manual segmentations were used as input for the model. The data were split into a training set of 1151 cases and a testing set of 100 cases, with the testing set remaining constant throughout. Deep convolutional neural network segmentation models were trained using the NiftyNet platform. To test the viability of active learning in training a segmentation model, an initial reference model was trained using all 1151 training cases, followed by two additional models using only 575 cases and 100 cases. The segmentations predicted by these two additional models on the remaining training cases were then appended to the training dataset for additional training. RESULTS: An active learning approach to manual segmentation led to comparable model performance for segmentation of brain gliomas (0.906 reference Dice score vs 0.868 active learning Dice score) while requiring manual annotation for only 28.6% of the data. CONCLUSION: The active learning approach, when applied to model training, can drastically reduce the time and labor spent on preparation of ground truth training data. CRITICAL RELEVANCE STATEMENT: Active learning concepts were applied to a deep learning-assisted segmentation of brain gliomas from MR images to assess their viability in reducing the required amount of manually annotated ground truth data in model training. KEY POINTS: • This study focuses on assessing the performance of active learning techniques for training a brain MRI glioma segmentation model. • An active learning approach to manual segmentation can lead to comparable model performance for segmentation of brain gliomas. • Active learning, when applied to model training, can drastically reduce the time and labor spent on preparation of ground truth training data.
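
The loop described above can be summarized schematically as follows; every function name here is an illustrative placeholder, not code from the study.

```python
# Schematic active-learning loop: train on a small labelled pool, let the
# model pre-segment the rest, have annotators correct (rather than draw from
# scratch), append the corrections, and retrain.
def active_learning_loop(labelled: dict, unlabelled: list,
                         train, predict, review, rounds: int = 2):
    model = train(labelled)
    for _ in range(rounds):
        proposals = {case: predict(model, case) for case in unlabelled}
        corrected = review(proposals)                # human fixes model output
        labelled.update(corrected)                   # grow the training pool
        unlabelled = [c for c in unlabelled if c not in corrected]
        model = train(labelled)
    return model
```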

10.
Phys Med Biol ; 68(16)2023 07 31.
Article in English | MEDLINE | ID: mdl-37327792

ABSTRACT

Objective. Cerebral CT perfusion (CTP) imaging is most commonly used to diagnose acute ischaemic stroke and support treatment decisions. Shortening the CTP scan duration is desirable to reduce the accumulated radiation dose and the risk of patient head movement. In this study, we present a novel application of a stochastic adversarial video prediction approach to reduce CTP imaging acquisition time. Approach. A variational autoencoder and generative adversarial network (VAE-GAN) were implemented in a recurrent framework in three scenarios: to predict the last 8 (24 s), 13 (31.5 s) and 18 (39 s) image frames of the CTP acquisition from the first 25 (36 s), 20 (28.5 s) and 15 (21 s) acquired frames, respectively. The model was trained using 65 stroke cases and tested on 10 unseen cases. Predicted frames were assessed against the ground truth in terms of image quality and haemodynamic maps, bolus shape characteristics and volumetric analysis of lesions. Main results. In all three prediction scenarios, the mean percentage error between the area, full-width-at-half-maximum and maximum enhancement of the predicted and ground-truth bolus curves was less than 4 ± 4%. The best peak signal-to-noise ratio and structural similarity of the predicted haemodynamic maps were obtained for cerebral blood volume, followed (in order) by cerebral blood flow, mean transit time and time to peak. For the three prediction scenarios, the lesion volume was overestimated on average by 7%-15%, 11%-28% and 7%-22% for the infarct, penumbra and hypo-perfused regions, respectively, and the corresponding spatial agreement for these regions was 67%-76%, 76%-86% and 83%-92%. Significance. This study suggests that a recurrent VAE-GAN could potentially be used to predict a portion of CTP frames from truncated acquisitions, preserving the majority of the clinical content in the images while potentially reducing the scan duration and radiation dose simultaneously by 65% and 54.5%, respectively.
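
As an aside on the bolus-shape characteristics used for evaluation, the snippet below computes peak enhancement, area under the curve, and a simple sample-based full-width-at-half-maximum (no sub-sample interpolation) from a time-enhancement curve; the names are illustrative.

```python
# Bolus-shape characteristics of a CTP time-enhancement curve.
import numpy as np

def bolus_shape(t: np.ndarray, curve: np.ndarray) -> dict:
    peak = curve.max()
    half = peak / 2.0
    above = np.where(curve >= half)[0]          # samples at/above half-maximum
    fwhm = t[above[-1]] - t[above[0]] if above.size > 1 else 0.0
    return {"max_enhancement": float(peak),
            "area": float(np.trapz(curve, t)),  # area under the curve
            "fwhm": float(fwhm)}
```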


Subject(s)
Brain Ischemia; Stroke; Humans; Stroke/diagnostic imaging; Tomography, X-Ray Computed/methods; Neural Networks, Computer; Perfusion Imaging/methods; Cerebrovascular Circulation/physiology; Radiation Dosage
11.
Eur J Nucl Med Mol Imaging ; 50(7): 1881-1896, 2023 06.
Article in English | MEDLINE | ID: mdl-36808000

ABSTRACT

PURPOSE: The partial volume effect (PVE) is a consequence of the limited spatial resolution of PET scanners. PVE can cause the intensity values of a particular voxel to be underestimated or overestimated due to the effect of surrounding tracer uptake. We propose a novel partial volume correction (PVC) technique to overcome the adverse effects of PVE on PET images. METHODS: Two hundred and twelve clinical brain PET scans, including 50 18F-Fluorodeoxyglucose (18F-FDG), 50 18F-Flortaucipir, 36 18F-Flutemetamol, and 76 18F-FluoroDOPA, and their corresponding T1-weighted MR images were enrolled in this study. The iterative Yang technique was used for PVC as a reference (surrogate ground truth) for evaluation. A cycle-consistent adversarial network (CycleGAN) was trained to directly map non-PVC PET images to PVC PET images. Quantitative analysis using various metrics, including the structural similarity index (SSIM), root mean squared error (RMSE), and peak signal-to-noise ratio (PSNR), was performed. Furthermore, voxel-wise and region-wise correlations of activity concentration between the predicted and reference images were evaluated through joint histogram and Bland-Altman analysis. In addition, radiomic analysis was performed by calculating 20 radiomic features within 83 brain regions. Finally, a voxel-wise two-sample t-test was used to compare the predicted PVC PET images with the reference PVC images for each radiotracer. RESULTS: The Bland-Altman analysis showed the largest and smallest variance for 18F-FDG (95% CI: -0.29, +0.33 SUV, mean = 0.02 SUV) and 18F-Flutemetamol (95% CI: -0.26, +0.24 SUV, mean = -0.01 SUV), respectively. The PSNR was lowest (29.64 ± 1.13 dB) for 18F-FDG and highest (36.01 ± 3.26 dB) for 18F-Flutemetamol. The smallest and largest SSIM were achieved for 18F-FDG (0.93 ± 0.01) and 18F-Flutemetamol (0.97 ± 0.01), respectively. The average relative error for the kurtosis radiomic feature was 3.32%, 9.39%, 4.17%, and 4.55%, while it was 4.74%, 8.80%, 7.27%, and 6.81% for the NGLDM_contrast feature, for 18F-Flutemetamol, 18F-FluoroDOPA, 18F-FDG, and 18F-Flortaucipir, respectively. CONCLUSION: An end-to-end CycleGAN PVC method was developed and evaluated. Our model generates PVC images from the original non-PVC PET images without requiring additional anatomical information, such as MRI or CT. It eliminates the need for accurate registration, segmentation, or characterization of the PET scanner's system response. In addition, no assumptions regarding anatomical structure size, homogeneity, boundary, or background level are required.
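
The core idea behind the CycleGAN used here is the cycle-consistency constraint, sketched below in PyTorch; the generators g_ab/g_ba and the weight lam are placeholders, and the full objective also includes adversarial terms omitted for brevity.

```python
# Cycle-consistency loss: mapping non-PVC -> PVC -> non-PVC (and the reverse)
# should reproduce the input image.
import torch

def cycle_consistency_loss(g_ab, g_ba, a: torch.Tensor, b: torch.Tensor,
                           lam: float = 10.0) -> torch.Tensor:
    loss_a = torch.mean(torch.abs(g_ba(g_ab(a)) - a))   # a -> b -> a
    loss_b = torch.mean(torch.abs(g_ab(g_ba(b)) - b))   # b -> a -> b
    return lam * (loss_a + loss_b)
```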


Subject(s)
Aniline Compounds; Fluorodeoxyglucose F18; Humans; Positron-Emission Tomography/methods; Brain/diagnostic imaging; Image Processing, Computer-Assisted/methods
12.
Eur J Nucl Med Mol Imaging ; 50(4): 1034-1050, 2023 03.
Article in English | MEDLINE | ID: mdl-36508026

ABSTRACT

PURPOSE: Attenuation correction and scatter compensation (AC/SC) are two main steps toward quantitative PET imaging, which remain challenging in PET-only and PET/MRI systems. These can be effectively tackled via deep learning (DL) methods. However, trustworthy and generalizable DL models commonly require well-curated, heterogeneous, and large datasets from multiple clinical centers. At the same time, owing to legal/ethical issues and privacy concerns, forming a large collective, centralized dataset poses significant challenges. In this work, we aimed to develop a DL-based model in a multicenter setting without direct sharing of data, using federated learning (FL) for AC/SC of PET images. METHODS: Non-attenuation/scatter-corrected and CT-based attenuation/scatter-corrected (CT-ASC) 18F-FDG PET images of 300 patients were enrolled in this study. The dataset consisted of 6 different centers, each with 50 patients, with scanner, image acquisition, and reconstruction protocols varying across the centers. CT-based ASC PET images served as the standard reference. All images were reviewed to include high-quality and artifact-free PET images. Both corrected and uncorrected PET images were converted to standardized uptake values (SUVs). We used a modified nested U-Net utilizing residual U-blocks in a U-shaped architecture. We evaluated two FL models, namely sequential (FL-SQ) and parallel (FL-PL), and compared their performance with the baseline centralized (CZ) learning model, wherein the data were pooled on one server, as well as center-based (CB) models, in which a model was built and evaluated separately for each center. Data from each center were divided into training (30 patients), validation (10 patients), and test (10 patients) sets. Final evaluations and reports were performed on 60 patients (10 patients from each center). RESULTS: In terms of percent SUV absolute relative error (ARE%), both the FL-SQ (CI: 12.21-14.81%) and FL-PL (CI: 11.82-13.84%) models demonstrated excellent agreement with the centralized framework (CI: 10.32-12.00%), while FL-based algorithms improved model performance by over 11% compared to the CB training strategy (CI: 22.34-26.10%). Furthermore, the Mann-Whitney test between the different strategies revealed no significant differences between the CZ and FL-based algorithms (p-value > 0.05) in center-categorized mode. At the same time, a significant difference was observed between the different training approaches on the overall dataset (p-value < 0.05). In addition, voxel-wise comparison with respect to the reference CT-ASC exhibited similar performance for images predicted by CZ (R2 = 0.94), FL-SQ (R2 = 0.93), and FL-PL (R2 = 0.92), while the CB model achieved a far lower coefficient of determination (R2 = 0.74). Despite the strong correlation of the CZ and FL-based methods with the reference CT-ASC, a slight underestimation of predicted voxel values was observed. CONCLUSION: Deep learning-based models provide promising results toward quantitative PET image reconstruction. Specifically, we developed two FL models and compared their performance with center-based and centralized models. The proposed FL-based models achieved higher performance than the center-based models, comparable with the centralized model. Our work provides strong empirical evidence that the FL framework can fully benefit from the generalizability and robustness of DL models used for AC/SC in PET, while obviating the need for the direct sharing of datasets between clinical imaging centers.
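
A minimal FedAvg-style sketch of the parallel FL strategy is given below: each center trains locally and only model weights, never images, are aggregated on the server. The paper's exact aggregation schedule may differ; local_train is a placeholder, and integer buffers (e.g., batch-norm counters) would need special handling in practice.

```python
# One federated-averaging round: local training at each center followed by a
# weighted average of the resulting model weights on the server.
import copy
import torch

def federated_round(global_model, centers, local_train, weights=None):
    states = []
    for center in centers:                      # in practice these run in parallel
        local = copy.deepcopy(global_model)
        local_train(local, center)              # SGD on this center's data only
        states.append(local.state_dict())
    w = weights or [1.0 / len(states)] * len(states)
    avg = {k: sum(wi * s[k].float() for wi, s in zip(w, states))
           for k in states[0]}                  # weighted average per parameter
    global_model.load_state_dict(avg)
    return global_model
```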


Subject(s)
Deep Learning; Image Processing, Computer-Assisted; Humans; Image Processing, Computer-Assisted/methods; Positron Emission Tomography Computed Tomography; Positron-Emission Tomography/methods; Magnetic Resonance Imaging/methods
13.
Sci Rep ; 12(1): 14817, 2022 09 01.
Article in English | MEDLINE | ID: mdl-36050434

ABSTRACT

We aimed to construct a prediction model based on computed tomography (CT) radiomics features to classify COVID-19 patients into severe, moderate, mild, and non-pneumonic groups. A total of 1110 patients were studied from a publicly available dataset with 4-class severity scoring performed by a radiologist (based on CT images and clinical features). The entire lungs were segmented, followed by resizing, bin discretization, and radiomic feature extraction. We utilized two feature selection algorithms, namely bagging random forest (BRF) and multivariate adaptive regression splines (MARS), each coupled to a classifier, namely multinomial logistic regression (MLR), to construct multiclass classification models. The dataset was divided into 50% (555 samples), 20% (223 samples), and 30% (332 samples) for training, validation, and untouched test datasets, respectively. Subsequently, nested cross-validation was performed on the train/validation sets to select the features and tune the models. All predictive power indices were reported based on the test set. The performance of the multi-class models was assessed using precision, recall, F1-score, and accuracy based on the 4 × 4 confusion matrices. In addition, the areas under the receiver operating characteristic curves (AUCs) for multi-class classification were calculated and compared for both models. Using BRF, 23 radiomic features were selected: 11 from first-order, 9 from GLCM, 1 from GLRLM, 1 from GLDM, and 1 from shape features. Ten features were selected using the MARS algorithm, namely 3 from first-order, 1 from GLDM, 1 from GLRLM, 1 from GLSZM, 1 from shape, and 3 from GLCM features. The mean absolute deviation, skewness, and variance from first-order features, flatness from shape features, cluster prominence from GLCM features, and Gray Level Non-Uniformity Normalized from GLRLM features were selected by both the BRF and MARS algorithms. All features selected by BRF or MARS were significantly associated with the four-class outcome as assessed within MLR (all p values < 0.05). BRF + MLR and MARS + MLR resulted in pseudo-R2 prediction performances of 0.305 and 0.253, respectively, and there was a significant difference between the feature selection models in a likelihood ratio test (p value = 0.046). Based on the confusion matrices for the BRF + MLR and MARS + MLR algorithms, the precision was 0.856 and 0.728, the recall was 0.852 and 0.722, and the accuracy was 0.921 and 0.861, respectively. AUCs (95% CI) for multi-class classification were 0.846 (0.805-0.887) and 0.807 (0.752-0.861) for the BRF + MLR and MARS + MLR algorithms, respectively. Our models, based on the utilization of radiomic features coupled with machine learning, were able to accurately classify patients according to the severity of pneumonia, thus highlighting the potential of this emerging paradigm in the prognostication and management of COVID-19 patients.
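
For concreteness, the multinomial logistic regression stage could look like the scikit-learn sketch below, assuming the BRF- or MARS-selected features are already assembled in a feature matrix X with 4-class labels y; this is not the study's code, and the nested cross-validation used in the paper is more involved.

```python
# Multinomial logistic regression on pre-selected radiomic features.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

mlr = make_pipeline(
    StandardScaler(),                                   # scale radiomic features
    LogisticRegression(multi_class="multinomial", max_iter=1000),
)
# scores = cross_val_score(mlr, X, y, cv=5)             # simple CV illustration
```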


Subject(s)
COVID-19; Algorithms; COVID-19/diagnostic imaging; Humans; Machine Learning; ROC Curve; Tomography, X-Ray Computed/methods
14.
Hum Brain Mapp ; 43(16): 5032-5043, 2022 11.
Article in English | MEDLINE | ID: mdl-36087092

ABSTRACT

We aim to synthesize brain time-of-flight (TOF) PET images/sinograms from their corresponding non-TOF information in the image space (IS) and sinogram space (SS) to increase the signal-to-noise ratio (SNR) and contrast of abnormalities, and decrease the bias in tracer uptake quantification. One hundred forty clinical brain 18F-FDG PET/CT scans were collected to generate TOF and non-TOF sinograms. The TOF sinograms were split into seven time bins (0, ±1, ±2, ±3). The predicted TOF sinograms were reconstructed and the performance of both models (IS and SS) was compared with reference TOF and non-TOF images. Wide-ranging quantitative and statistical analysis metrics, including the structural similarity index metric (SSIM) and root mean square error (RMSE), as well as 28 radiomic features for 83 brain regions, were extracted to evaluate the performance of the CycleGAN model. SSIM values of 0.99 ± 0.03 and 0.98 ± 0.02 and RMSE values of 0.12 ± 0.09 and 0.16 ± 0.04 were achieved for the generated TOF-PET images in IS and SS, respectively; the corresponding values for non-TOF-PET images were 0.97 ± 0.03 and 0.22 ± 0.12. The Bland-Altman analysis revealed that the lowest tracer uptake value bias (-0.02%) and minimum variance (95% CI: -0.17%, +0.21%) were achieved for TOF-PET images generated in IS. For malignant lesions, the contrast in the test dataset was enhanced from 3.22 ± 2.51 for non-TOF to 3.34 ± 0.41 and 3.65 ± 3.10 for TOF PET in SS and IS, respectively. The implemented CycleGAN is capable of generating TOF from non-TOF PET images, achieving better image quality.


Subject(s)
Deep Learning; Fluorodeoxyglucose F18; Humans; Image Processing, Computer-Assisted/methods; Positron-Emission Tomography; Positron Emission Tomography Computed Tomography; Brain/diagnostic imaging
15.
Phys Med Biol ; 67(15)2022 07 29.
Article in English | MEDLINE | ID: mdl-35803249

ABSTRACT

Organ-specific PET scanners have been developed to provide both high spatial resolution and sensitivity, although the deployment of several dedicated PET scanners at the same center is costly and space-consuming. Active-PET is a multifunctional PET scanner design exploiting the advantages of two different types of detector modules and mechanical arm mechanisms enabling repositioning of the detectors to allow the implementation of different geometries/configurations. Active-PET can be used for different applications, including brain, axilla, breast, prostate, whole-body, preclinical and pediatric imaging, cell tracking, and image guidance for therapy. Monte Carlo techniques were used to simulate a PET scanner with two sets of high-resolution and high-sensitivity pixelated Lutetium Oxyorthosilicate (LSO(Ce)) detector blocks (24 per group, 48 detector modules in total per ring), one with large pixel size (4 × 4 mm2) and crystal thickness (20 mm), and another with small pixel size (2 × 2 mm2) and crystal thickness (10 mm). Each row of detector modules is connected to a linear motor that can displace the detectors forward and backward along the radial axis to achieve a variable gantry diameter, in order to image the target subject at the optimal/desired resolution and/or sensitivity. At the center of the field-of-view, the highest sensitivity (15.98 kcps MBq-1) was achieved by the scanner with a small gantry and high-sensitivity detectors, while the best spatial resolution was obtained by the scanner with a small gantry and high-resolution detectors (2.2 mm, 2.3 mm, and 2.5 mm FWHM in the tangential, radial, and axial directions, respectively). The configuration with a large bore (combination of high-resolution and high-sensitivity detectors) achieved better performance and provided higher image quality compared to the Biograph mCT, as reflected by the 3D Hoffman brain phantom simulation study. We introduce the concept of a non-static PET scanner capable of switching between large and small fields-of-view as well as between high-resolution and high-sensitivity imaging.


Subject(s)
Positron-Emission Tomography; Child; Computer Simulation; Equipment Design; Humans; Male; Monte Carlo Method; Phantoms, Imaging; Positron-Emission Tomography/methods
16.
Comput Biol Med ; 145: 105467, 2022 06.
Article in English | MEDLINE | ID: mdl-35378436

ABSTRACT

BACKGROUND: We aimed to analyze the prognostic power of CT-based radiomics models using data from 14,339 COVID-19 patients. METHODS: Whole-lung segmentations were performed automatically using a deep learning-based model, and 107 intensity and texture radiomics features were extracted. We used four feature selection algorithms and seven classifiers. We evaluated the models using ten different splitting and cross-validation strategies, including non-harmonized and ComBat-harmonized datasets. The sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) were reported. RESULTS: In the test dataset (4,301 cases) consisting of CT- and/or RT-PCR-positive cases, an AUC, sensitivity, and specificity of 0.83 ± 0.01 (95% CI: 0.81-0.85), 0.81, and 0.72, respectively, were obtained by the ANOVA feature selector + random forest (RF) classifier. Similar results were achieved in the RT-PCR-only positive test set (3,644 cases). In the ComBat-harmonized dataset, the Relief feature selector + RF classifier resulted in the highest performance, with an AUC reaching 0.83 ± 0.01 (95% CI: 0.81-0.85) and a sensitivity and specificity of 0.77 and 0.74, respectively. ComBat harmonization did not yield a statistically significant improvement compared to the non-harmonized dataset. In leave-one-center-out validation, the combination of the ANOVA feature selector and RF classifier resulted in the highest performance. CONCLUSION: Lung CT radiomics features can be used for robust prognostic modeling of COVID-19. The predictive power of the proposed CT radiomics model is more reliable when using a large multicentric heterogeneous dataset, and it may be used prospectively in the clinical setting to manage COVID-19 patients.
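
The leave-one-center-out evaluation mentioned above can be expressed with scikit-learn's LeaveOneGroupOut, as sketched below; X, y, and centers are placeholders for the radiomics features, outcomes, and per-patient center labels, and the classifier settings are assumptions.

```python
# Leave-one-center-out validation: each fold holds out all patients from one
# acquisition center and trains on the remaining centers.
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneGroupOut

def leave_one_center_out(X, y, centers):
    aucs = []
    for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=centers):
        clf = RandomForestClassifier(n_estimators=300, random_state=0)
        clf.fit(X[train_idx], y[train_idx])
        aucs.append(roc_auc_score(y[test_idx],
                                  clf.predict_proba(X[test_idx])[:, 1]))
    return aucs  # one AUC per held-out center
```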


Subject(s)
COVID-19; Lung Neoplasms; Algorithms; COVID-19/diagnostic imaging; Humans; Machine Learning; Prognosis; Retrospective Studies; Tomography, X-Ray Computed/methods
17.
Clin Nucl Med ; 47(7): 606-617, 2022 Jul 01.
Article in English | MEDLINE | ID: mdl-35442222

ABSTRACT

PURPOSE: The generalizability and trustworthiness of deep learning (DL)-based algorithms depend on the size and heterogeneity of the training datasets. However, because of patient privacy concerns and ethical and legal issues, sharing medical images between different centers is restricted. Our objective was to build a federated DL-based framework for PET image segmentation utilizing a multicentric dataset and to compare its performance with the centralized DL approach. METHODS: PET images from 405 head and neck cancer patients from 9 different centers formed the basis of this study. All tumors were segmented manually. PET images converted to SUV maps were resampled to isotropic voxels (3 × 3 × 3 mm3) and then normalized. PET image subvolumes (12 × 12 × 12 cm3) consisting of whole tumors and background were analyzed. Data from each center were divided into train/validation (80% of patients) and test sets (20% of patients). The modified R2U-Net was used as the core DL model. A parallel federated DL model was developed and compared with the centralized approach, in which the datasets are pooled on one server. Segmentation metrics, including the Dice similarity and Jaccard coefficients, as well as percent relative errors (RE%) of SUVpeak, SUVmean, SUVmedian, SUVmax, metabolic tumor volume, and total lesion glycolysis, were computed and compared with manual delineations. RESULTS: The performance of the centralized versus federated DL methods was nearly identical for the segmentation metrics: Dice (0.84 ± 0.06 vs 0.84 ± 0.05) and Jaccard (0.73 ± 0.08 vs 0.73 ± 0.07). For the quantitative PET parameters, we obtained comparable RE% for SUVmean (6.43% ± 4.72% vs 6.61% ± 5.42%), metabolic tumor volume (12.2% ± 16.2% vs 12.1% ± 15.89%), and total lesion glycolysis (6.93% ± 9.6% vs 7.07% ± 9.85%), and negligible RE% for SUVmax and SUVpeak. No significant differences in performance (P > 0.05) between the 2 frameworks (centralized vs federated) were observed. CONCLUSION: The developed federated DL model achieved comparable quantitative performance with respect to the centralized DL model. Federated DL models could provide robust and generalizable segmentation while addressing patient privacy and the legal and ethical issues involved in clinical data sharing.
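
For reference, the Dice and Jaccard agreement metrics reported above are computed on binary masks as follows; the epsilon guard is a common convention for handling empty masks.

```python
# Dice similarity and Jaccard coefficients between predicted and reference
# binary segmentation masks.
import numpy as np

def dice_jaccard(pred: np.ndarray, ref: np.ndarray, eps: float = 1e-8):
    p, r = pred.astype(bool), ref.astype(bool)
    inter = np.logical_and(p, r).sum()
    dice = (2.0 * inter + eps) / (p.sum() + r.sum() + eps)
    jaccard = (inter + eps) / (np.logical_or(p, r).sum() + eps)
    return dice, jaccard
```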


Subject(s)
Deep Learning; Head and Neck Neoplasms; Algorithms; Humans; Image Processing, Computer-Assisted/methods; Positron-Emission Tomography
18.
J Digit Imaging ; 35(3): 469-481, 2022 06.
Article in English | MEDLINE | ID: mdl-35137305

ABSTRACT

A small dataset commonly affects the generalization, robustness, and overall performance of deep neural networks (DNNs) in medical imaging research. Since gathering large clinical databases is always difficult, we propose an analytical method for producing a large, realistic, and diverse dataset. Clinical brain PET/CT/MR images, including full-dose (FD), low-dose (LD, corresponding to only 5% of the events acquired in the FD scan), non-attenuation-corrected (NAC) and CT-based measured attenuation-corrected (MAC) PET images, CT images, and T1 and T2 MR sequences of 35 patients were included. All images were registered to the Montreal Neurological Institute (MNI) template. Laplacian blending was used to create a natural-looking composite using frequency-domain information from images of two separate patients, together with a blending mask. This classical technique from the computer vision and image processing communities is still widely used and, unlike modern DNNs, does not require the availability of training data. A modified ResNet DNN was implemented to evaluate four image-to-image translation tasks, including LD to FD, LD + MR to FD, NAC to MAC, and MRI to CT, with and without the synthesized images. Quantitative analysis using established metrics, including the peak signal-to-noise ratio (PSNR), structural similarity index metric (SSIM), and joint histogram analysis, was performed. The quantitative comparison between the small registered dataset of 35 patients and the large dataset of 350 synthesized plus 35 real cases demonstrated improvement of the RMSE and SSIM by 29% and 8% for the LD to FD, 40% and 7% for the LD + MRI to FD, 16% and 8% for the NAC to MAC, and 24% and 11% for the MRI to CT mapping tasks, respectively. The qualitative/quantitative analysis demonstrated that the proposed approach improved the performance of all four DNN models, producing images of higher quality with lower quantitative bias and variance compared to reference images.
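
A sketch of the classic Laplacian-pyramid blending step for a 2D slice is shown below using OpenCV, assuming float32 images whose sides are multiples of 2**levels; the level count and the handling of 3D volumes are simplifications relative to the paper.

```python
# Laplacian-pyramid blending: blend the Laplacian levels of two images under
# the Gaussian pyramid of a mask, then collapse the pyramid.
import cv2
import numpy as np

def laplacian_blend(img_a: np.ndarray, img_b: np.ndarray,
                    mask: np.ndarray, levels: int = 4) -> np.ndarray:
    """Blend img_b into img_a under `mask` (1.0 where img_a is kept).
    Assumes 2D float32 inputs whose sides are multiples of 2**levels."""
    ga, gb, gm = [img_a], [img_b], [mask]
    for _ in range(levels):                          # Gaussian pyramids
        ga.append(cv2.pyrDown(ga[-1]))
        gb.append(cv2.pyrDown(gb[-1]))
        gm.append(cv2.pyrDown(gm[-1]))
    out = gm[-1] * ga[-1] + (1 - gm[-1]) * gb[-1]    # blend coarsest level
    for i in range(levels - 1, -1, -1):              # collapse, blending Laplacians
        la = ga[i] - cv2.pyrUp(ga[i + 1])
        lb = gb[i] - cv2.pyrUp(gb[i + 1])
        out = cv2.pyrUp(out) + gm[i] * la + (1 - gm[i]) * lb
    return out
```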


Subject(s)
Deep Learning; Brain/diagnostic imaging; Humans; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging; Neuroimaging/methods; Positron Emission Tomography Computed Tomography
19.
Int J Imaging Syst Technol ; 32(1): 12-25, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34898850

ABSTRACT

We present a deep learning (DL)-based automated whole lung and COVID-19 pneumonia infectious lesion (COLI-Net) detection and segmentation framework for chest computed tomography (CT) images. This multicenter/multiscanner study involved 2368 (347,259 2D slices) and 190 (17,341 2D slices) volumetric CT exams along with their corresponding manual segmentations of lungs and lesions, respectively. All images were cropped and resized, and the intensity values were clipped and normalized. A residual network with a non-square Dice loss function built upon TensorFlow was employed. The accuracy of lung and COVID-19 lesion segmentation was evaluated on an external reverse transcription-polymerase chain reaction (RT-PCR) positive COVID-19 dataset (7,333 2D slices) collected at five different centers. To evaluate the segmentation performance, we calculated different quantitative metrics, including radiomic features. The mean Dice coefficients were 0.98 ± 0.011 (95% CI, 0.98-0.99) and 0.91 ± 0.038 (95% CI, 0.90-0.91) for lung and lesion segmentation, respectively. The mean relative Hounsfield unit differences were 0.03 ± 0.84% (95% CI, -0.12 to 0.18) and -0.18 ± 3.4% (95% CI, -0.8 to 0.44) for the lungs and lesions, respectively. The relative volume differences for lungs and lesions were 0.38 ± 1.2% (95% CI, 0.16-0.59) and 0.81 ± 6.6% (95% CI, -0.39 to 2), respectively. Most radiomic features had a mean relative error of less than 5%, with the highest mean relative errors observed for the range first-order feature of the lung (-6.95%) and the least axis length shape feature of the lesions (8.68%). We developed an automated DL-guided three-dimensional whole lung and infected region segmentation framework for COVID-19 patients to provide a fast, consistent, robust, and human-error-immune pipeline for lung and pneumonia lesion detection and quantification.
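
The "non-square" Dice loss named above is commonly understood as the soft Dice formulation whose denominator uses plain sums rather than squared probabilities; the TensorFlow sketch below follows that reading, with an assumed smoothing constant.

```python
# Soft Dice loss in the "non-square" form: the denominator sums the raw
# probabilities and labels instead of their squares.
import tensorflow as tf

def dice_loss(y_true, y_pred, smooth: float = 1e-6):
    y_true = tf.cast(y_true, y_pred.dtype)
    inter = tf.reduce_sum(y_true * y_pred)
    denom = tf.reduce_sum(y_true) + tf.reduce_sum(y_pred)  # non-square denominator
    return 1.0 - (2.0 * inter + smooth) / (denom + smooth)
```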

20.
Neuroimage ; 245: 118697, 2021 12 15.
Article in English | MEDLINE | ID: mdl-34742941

ABSTRACT

PURPOSE: Reducing the injected activity and/or the scanning time is a desirable goal to minimize radiation exposure and maximize patient comfort. To achieve this goal, we developed a deep neural network (DNN) model for synthesizing full-dose (FD) time-of-flight (TOF) bin sinograms from their corresponding fast/low-dose (LD) TOF bin sinograms. METHODS: Clinical brain PET/CT raw data of 140 normal and abnormal patients were employed to create LD and FD TOF bin sinograms. The LD TOF sinograms were created through 5% undersampling of the FD list-mode PET data. The TOF sinograms were split into seven time bins (0, ±1, ±2, ±3). Residual network (ResNet) algorithms were trained separately to generate FD bins from LD bins. An extra ResNet model was trained to synthesize FD images from LD images in order to compare the performance of the DNN in sinogram space (SS) versus implementation in image space (IS). Comprehensive quantitative and statistical analysis was performed to assess the performance of the proposed model using established quantitative metrics, including the peak signal-to-noise ratio (PSNR), structural similarity index metric (SSIM), region-wise standardized uptake value (SUV) bias, and statistical analysis for 83 brain regions. RESULTS: SSIM and PSNR values of 0.97 ± 0.01, 0.98 ± 0.01 and 33.70 ± 0.32, 39.36 ± 0.21 were obtained for IS and SS, respectively, compared to 0.86 ± 0.02 and 31.12 ± 0.22 for the reference LD images. The absolute average SUV bias was 0.96 ± 0.95% and 1.40 ± 0.72% for the SS and IS implementations, respectively. The joint histogram analysis revealed that the lowest mean square error (MSE) and highest correlation (R2 = 0.99, MSE = 0.019) were achieved by SS compared to IS (R2 = 0.97, MSE = 0.028). The Bland & Altman analysis showed that the lowest SUV bias (-0.4%) and minimum variance (95% CI: -2.6%, +1.9%) were achieved by SS images. The voxel-wise t-test analysis revealed the presence of voxels with statistically significantly lower values in the LD, IS, and SS images compared to the FD images. CONCLUSION: The results demonstrated that images reconstructed from the predicted TOF FD sinograms using the SS approach led to higher image quality and lower bias compared to images predicted from LD images.


Subject(s)
Deep Learning; Image Processing, Computer-Assisted/methods; Neurodegenerative Diseases/diagnostic imaging; Neuroimaging/methods; Positron Emission Tomography Computed Tomography; Aged; Databases, Factual; Female; Humans; Male; Signal-To-Noise Ratio